3D Gaussian Splatting from Hollywood Films!
- Published: 28 Sep 2024
- Turning movie shots into 3D scenes using 3D Gaussian Splatting. We found movie clips online, converted scenes to image sequences, trained their GSPLATS, and brought them to UNREAL ENGINE 5.
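The "converted scenes to image sequences" step can be sketched in a few lines. This is a hedged illustration only: the clip name, frame rate, and output pattern below are our assumptions, not the authors' actual settings.

```python
# Hypothetical sketch of the clip -> image sequence step.
# Clip name, fps, and output pattern are illustrative assumptions.
import subprocess

def frame_extract_cmd(clip, out_dir, fps=4):
    """Build an ffmpeg command that samples `fps` frames per second
    from `clip` into numbered PNGs for COLMAP / splat training."""
    return [
        "ffmpeg", "-i", clip,
        "-vf", f"fps={fps}",        # sparse, evenly spaced frames
        f"{out_dir}/%04d.png",
    ]

cmd = frame_extract_cmd("shining_flyover.mp4", "frames")
# subprocess.run(cmd, check=True)  # uncomment to actually run ffmpeg
```

Sampling sparsely (rather than dumping every frame) keeps neighbouring images distinct enough for structure-from-motion to match features reliably.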
3D Gaussian Splatting for Real-Time Radiance Field Rendering Paper:
repo-sam.inria...
Join our discord server:
/ discord
If you wanna see us do cool things, follow us here too:
Instagram: / badxstudio
Twitter: / badxstudio
TikTok: / badxstudio
LinkedIn: / badxstudio
Bad Decisions Podcast 🎙️:
podcasters.spo...
Our personal handles: (if you wanna stalk us)
/ farhad_sh
/ farazshababs
/ farhads__
/ farazshababi
/ farhadshababi
/ farazshababi
#unrealengine5 #3dgaussiansplatting #3dscan #gaussiansplatting #3d #3drender #nerf #unrealengine #blender3d #blender #drone #ai #photogrammetry #vfx #cgi #film
THIS IS WAYYY TOO INTERESTING TO MISS OUT ON. And Faraz casually being confidently wrong about 1980 being 33 years ago (when it's actually 43) is another highlight.
Imagine how cool it could be with 4D Gaussians? It is a recent paper about gaussian splats, but with motion. Yes, you can recreate 3D dynamic scenes from videos with this technique!
yea code is already out!
so CGI will look more realistic now?
@@irfanadamm5819 it's basically better for filmmakers in post. Once the fidelity gets really good, you're eliminating a lot of the DP's job outside of lighting, if you're able to change the focal length, camera movement and composition in post.
We saw the paper... and we are going to try it for sure, we just have 2 more experiments with 3DGS, VR and Video Games
@@pva5142 Why would they even want to do that? It's completely inorganic to do that, and the "fix it in post" mentality is gradually being forced out, thankfully. Also "outside of lighting"... so only the most important element of the job.
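For anyone curious what "gaussian splats, but with motion" means mechanically: in the 4D setting, each splat's centre becomes a function of time. The real 4DGS paper learns a deformation field; the linear motion model below is purely an illustrative assumption to show the data layout.

```python
# Toy sketch of the 4D idea: time-varying Gaussian centres.
# Linear per-splat motion is an assumed simplification, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)
n_splats = 100
centres_t0 = rng.normal(size=(n_splats, 3))        # static 3DGS: fixed centres
velocities = rng.normal(size=(n_splats, 3)) * 0.1  # assumed per-splat motion

def centres_at(t):
    """Centre of every Gaussian at time t (linear motion assumption)."""
    return centres_t0 + velocities * t
```

At t = 0 this reduces to an ordinary static splat scene, which is why 4DGS can be seen as a superset of the technique used in the video.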
2:18 it is even 43 years old. 😉 Wow! Great video as always guys!
I wish it was "just" 33 years ago :-D I was born in 1981, so I wouldn't mind 10y shaved off :-D
Not the best math 😅
hahahha
Oooppsss ..Quick Math
haha The math was amazing!
2:33 guys are living in 2013 😅 go grab that bitcoin asap
hahahha I wish bro
You can tell Gaussian Splatting is simple to do because even obnoxious loud people who can't spell 'Shining' can do it.
This is awfully like lucid dreams if you've had them. Very similar restrictions, like you can't go beyond a certain point where it becomes like an invisible border of less details or something obstructing your view
Wait really?? We’ve never had a lucid dream unfortunately. Is that truly how it feels?
I’ve never had a Lucid dream that had a barrier. Usually I can go wherever I like
@@luiginotcool if you used to fly in lucid dreams you would have eventually noticed that you can't go beyond a certain altitude.
What tools are you using to generate the splats?
You guys are soooooo underrated.. the videos are entertaining and educational. Good job!
They are the next corridor crew for the era of AI 3D vfx
content is good, but their style is very annoying. can't stand all the forced wooooooows
Thanks so much buddy! Glad you enjoy them
Man the editing of this video is pure art! And the energy of both of you is contagious, you really got me into unreal 💪🏻
Thanks for trying out movies. I was so excited to see how that would turn out. It's a little trickier than I thought it would be, even with some pretty perfect scenes (limited movement within the scene and rotating camera). Video games into VR is a great next idea, since a rotating camera is as simple as moving the right joystick in most cases. I'd also like to see if you can figure out how to edit the capture, delete unnecessary artifacts, etc. Is that possible?
In our UE video, we showed how you can crop these scenes, but we still don't have a precision tool for deleting each ellipsoid in UE (devs have already made it for Unity).
In the Trinity shot, did you notice the doubling was gone? It was also squashed; it's like the 3D point cloud stage undid the doubling by squashing the frames together, causing the whole scene to get squashed horizontally.
Yeah we realized that too but in general it made it difficult for the algorithm to arrange the point clouds.
Great stuff guys always entertaining and learning while watching 🙌
Chris that's the goal ma man
Always blessed especially when these videos drop 💯
Good to hear that G
Just finished the full video. Your video production, knowledge, camera presence and humour will make your channel blow up.
Wowwww thank u so much buddy
First thought in my head. I think the Neo bullet scene was the second thing I tried after I installed the software. It did not come out so well either. Gaussian Splatting can be used from now on for scenes like this for better effect.
I think the cool way I would use it would be to build 3d models from miniatures and sculptures for added realism, texture and flavor
Imagine taking one of those YouTube videos where someone filmed San Fran or LA in 1920 and using the forward motion to create a 3D splat or 4D splat. Somehow the forward video would have to be interpreted as a side view for at least a 180 degree view.
2:30 nice math bro
hahhaha ooppss
What's kind of cool is that even though AI is used to generate these, you can use some other AI to replace the missing "footage", for instance the trees and mountain behind. (Sure, it wouldn't necessarily be exactly what's actually behind the hotel, but it could generate lost information to give you a reasonable facsimile of what might have been behind it.)
We were talking about it after watching Adobe MAX; it would be dope to try the video inpainting feature. This is definitely something that will eventually be part of the process.
YALL ALREADY ON A WHOLE NEW ERA!
Glad you think so mate
that particular shot in the shining is from the timberline lodge in mount hood Oregon, USA
That shot from the Shining is taken at the top of Mt. Hood ski resort in Oregon. There is a glacier there where you can ski even in the summer time. When I skied there, I instantly recognized the lodge as the one used in all the exterior shots in the Shining.
Oh that is so cool to hear!! How was the ski experience there tho?
@@badxstudio I trained there with the Swedish and Korean national teams during the summer of 1991. Glacier skiing in the summer isn't the best when compared to normal winter powder skiing, but it's good for training in the off season.
That's crazy, very impressive!
I wonder how good these results would be if you exactly mirrored the actual video camera path and tracking, and then created 3d models from the point-in-time gaussian splat viewpoints that intersect across all frames.
That is a cool idea in theory! we will have to test that to see if it will actually work tho!
@@badxstudio keep me updated!
You guys beat me to it!!! I'm wondering how Gaussian Splatting would go with stereoscopic side by side video at full frame.
Listening to this on 10% volume and it still feels like I'm going to get a noise complaint
Hahahahahahahah 😜
can the gsplats cast and/or receive shadows? like if you put a cube in the middle of the table in that one scene and cast a light, will the table receive a shadow?
If I were going to film a talk with a host and guest, I assume I would do a 360 of them in their chairs before the 2D video interview started, in order to create a 4D splat of the 2D video? How would you do this?
Loads of Jerry Bruckheimer films you could have used.
That's so cool!
Couldn't agree more!
Please, do it in VR.
How did you guess the next video title? :D :D :D :D :D
Yes!
holy moly, turn it down a few notches lads !
The weirdness you get from trinity in the matrix is framerate/telecine conversion errors. A properly inverse-telecine'd dvd or non-rate-adjusted 23.976 bluray source would be the ideal source for this shot.
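For anyone unfamiliar with telecine: 3:2 pulldown spreads 4 film frames across 5 NTSC video frames by repeating fields, so some video frames blend two different moments in time, exactly the frames that poison a splat training set. A rough sketch of one common field cadence (not a full inverse-telecine filter):

```python
# Sketch of 3:2 pulldown: 4 film frames -> 5 video frames of (top, bottom) fields.
# The cadence shown is one common pattern; real discs vary.
def pulldown_32(film_frames):
    out = []
    for i in range(0, len(film_frames), 4):
        a, b, c, d = film_frames[i:i + 4]
        # Common 2:3 cadence: A/A, B/B, B/C, C/D, D/D
        out += [(a, a), (b, b), (b, c), (c, d), (d, d)]
    return out

video = pulldown_32(list("ABCD"))
mixed = [v for v in video if v[0] != v[1]]  # frames mixing two film frames
```

The two "mixed" frames are the ones that look like ghosting/doubling once the trainer treats them as single photographs, which is why a properly inverse-telecined source matters.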
Did he say the Shining helicopter scene was shot 20 years ago? .. It's actually almost 50 years ago (approx. 43-45) .. :-)
Hey, can you please try this on Apollo 11 or other missions? I tried it with Apollo 11, but only with the images, and they aren't overlapping enough :( Is it possible somehow to tell the 3DGS where each photo was taken and at what angle? Would that help?
Haha nice, I was thinking of something related, can you generate enough consistent ai images to create a 3d gaussian splat of something novel, mine was The Space Needle rising out of a foggy forest instead of in the city, but I couldn't get enough consistency to generate a point cloud. Someday soon maybe.
You would need some drone footage of the space needle, I think. What were you using?
@@joelface yeah, I was trying to learn COLMAP. I was thinking if you could get enough images of roughly the same object, it might be able to stitch something together. Idk if I'm fundamentally misunderstanding that process or if the images I had weren't consistent enough.
Does it need something like footage where you can get dense frames?
@@leptok3736 Footage is best because it's all the same exposure, same lens, same time of day, and then consistent path of depth for the software to build around. If you're just uploading a bunch of random images of the same object, there could be a lot of differences that the software can't figure out what to do with.
I think static and long views of fully CG scenes are probably the best bets of getting anything out of this.
Which is at least interesting for the idea that you could then take old somewhat subpar CG shots and recreate the scenes with the same camera movement. At least until it's possible to account for the moving elements completely.
Video games are cool and all as a use case but most of them are already 3d and can have their assets ripped, so the only real use case with that is putting funny things inside the scenes.
At this point I am not even surprised to see what it can do
Hahahahahaha I know right!
Would it work for scenes where cameras move across or remain stationary? I'm thinking of using this method to colorize black and white footage, so I'm just wondering
Camera needs to be moving to show clear depth! As of now this Gaussian Splatting tech works on finding points of interest, and if you are not moving your camera the right way you might fail your training!
@@badxstudio All I need really is for the scenes to be rendered exactly how they are for what I want to do, as I'd like to edit each object individually
I'm sure it's gonna be doable! You just have to test it out!@@thevfxmancolorizationvfxex4051
@@thevfxmancolorizationvfxex4051 I think there are other methods you could use for recolorizing old footage. I don't think gsplats will be the right way to do so, especially with stationary camera.. it can't figure out the depth of a scene without multiple viewpoints. It doesn't create any data that isn't already in the video.
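The "static camera can't give depth" point above can be shown with the rectified stereo relation: a point's horizontal shift between two views (disparity) is d = f * baseline / depth, so zero baseline means zero disparity at every depth and no depth signal at all. The focal length and baseline values below are illustrative assumptions.

```python
# Why the camera has to move for splat training to recover depth.
f_px = 1000.0  # assumed focal length in pixels

def disparity(depth_m, baseline_m):
    """Horizontal pixel shift of a point between two rectified views."""
    return f_px * baseline_m / depth_m

def depth_from_disparity(d_px, baseline_m):
    """Invert the relation: depth recoverable only when disparity is nonzero."""
    return f_px * baseline_m / d_px

z = 8.0
d = disparity(z, 0.5)            # moving camera: nonzero shift
assert abs(depth_from_disparity(d, 0.5) - z) < 1e-9
assert disparity(z, 0.0) == 0.0  # static camera: nothing to invert
```

This is also why forward-only motion (like dashcam or flyover footage) works but a locked-off tripod shot fails: the baseline between frames is what carries the 3D information.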
Brilliant idea.👍
Thanks buddy
"That movie was made in the 80s, that was 33 years ago" what. But then again, you can tell these guys aren't the brightest tool in the shed. They are marveling at something your phone has been able to do for a decade now
43 years ago, my friends 😅
Hahahaha fml … we failed maths
One Love!
Always forward, never ever backward!!
☀☀☀
💚💛❤
🙏🏿🙏🙏🏼
Lets gooooo!!!
this technique would be great for an overhead ring of cameras or partial ring of cameras synchronized similar to the matrix shots - not to take the single output video from per se as you did, but for the filmmakers being able to relight the scene in post
Lol for that first shot you have to undo the anamorphic squeeze or it looks funny
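Undoing a 2x anamorphic squeeze before training is essentially a horizontal stretch. A dependency-light numpy sketch (nearest-neighbour repeat keeps it simple; a real pipeline would resample properly with ffmpeg or PIL):

```python
# Desqueeze a 2x anamorphic frame by doubling its width.
# Nearest-neighbour repeat is an assumed shortcut, not production resampling.
import numpy as np

def desqueeze_2x(frame):
    """(H, W, C) squeezed frame -> (H, 2W, C) at the intended aspect ratio."""
    return np.repeat(frame, 2, axis=1)

squeezed = np.zeros((1080, 1024, 3), dtype=np.uint8)
wide = desqueeze_2x(squeezed)
```

Training on the squeezed frames instead would bake the wrong pixel aspect ratio into the point cloud, which matches the stretched-looking result in the video.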
1980 was 43 years ago
oooppsss
3:42 now consider the next leap forward where it will intelligently create all the parts we never had a camera record.
Yes exactly!!
looks like a visualization of memories
This is mindblowing
100% agreed
Imagine this thing being able to record every frame properly (instead of a few) and you could literally pause in any of them and look at them from any angle without losing detail. If anything, they should be able to fill in the blanks with AI
Yeah that's the thought! We like to believe that very soon AI will be able to help in generating the missing detail and create a complex scene that looks stunning!
I really love the work you guys do. Thank you ❤
Our pleasure!
you guys crazy for this one. Stanley Kubricks ghost is going to be up and snooping 👻🎥🧿
hahaha hope he is not angry.. we love his movies
How much could this change 3D films in VR?
We are bringing these 3D scenes into VR in the next video. Gonna check the performance and quality ;)
amazing ideas!
One more thing. Say I want to scan a building with a drone. Could I take interior shots and combine them for an outside and inside model? I assume once loaded into Unity or whatever you could piece the separate splats together.
Good work!
Thank you! Cheers!
13:46 it's Trinity calling, because you called her
OH SHIT!!!! WE JUST REALISED THAT hahahahahha imagine if we added that to the script
@@badxstudio ahahaha
How did you convert the Gaussians to geometry to work in unreal engine?
we used a plugin to convert them into Niagara Particles.
Fantastic stuff, I had no clue about this technology until I came across your channel guys. Parcham Balast
Love you dadash!
you blew my mind when you said Inglourious Basterds came out 15 years ago
🤯
43 years
ooopppss sorry hahah
this is better than HDRI and can be used for what's behind objects; with time it will get better. But how is it different from photogrammetry??
Well as for photogrammetry you get geometry with actual polygons which can be useful for things such as collision! There are other aspects to consider too of course :)
Bro this is crazy, future filmmaking is gonna bring up more creativity in pictures 🔥
video games will be a lot easier since: no lens distortion, no lens blur, no motion blur, no lens aberrations, more FPS... basically just easier to do structure from motion on video game footage in almost every way, except for the HUD which would need to be removed.
We are testing that too! Exactly it’s gonna be much simpler! Thinking about cyberpunk maybe for the test! Any recommendations?
The Neo shot works but you need like a splat editor to remove the artifacts.
considering this tech is in its infancy, imagine where we'll be 1 or 5 years from now.
looking awesome!
So I have been using Luma interactive scenes. It's pretty good, but I know you all mentioned in a previous video that you can get more control with the method you're using by letting it go through more variations. I have a 4090 GPU. Do you think it's worth doing the setup to get the extra fidelity? Also, how long are you all usually training a splat?
Super cool! I've been wanting to try the same thing with the matrix and NeRFs for a while now.
Right???? it's super exciting!!
there is a scene from Sherlock Holmes where Watson is getting married. Everything is frozen and it was shot from different angles. You can probably make a good 3D scene out of it.
The movie or TV series? Tell us exactly which one and we are going to do it haha
dude 1980 - 2023 = 43 years ago
Hahhahah yeah we realised that :D
Why did you guys not cut out the top and bottom widescreen bars? Great video otherwise, though, and a very interesting use of Gaussian Splats and Novel View Synthesis techniques in general!
Y'all are CRAZY and I love it
hahaha thanks buddy
VLC FTW! and if you wanted to know... the Shining LOCATION was filmed at the Stanley Hotel in Estes Park Colorado. =)
also... 2023-1980=43 =)
Have you been to the shining location?
yep! stayed there. stayed in room 217 even. =) hiked the mountains around it, swam naked in the pool at 2am, walked the stairs, listened to ghost stories, had an AMAZING time. also... turns out the stairs in the Stanley are ALSO where they shot the scene in Dumb and Dumber where the boys are fighting trying to get to the top. @@badxstudio
Ok guys I am convinced I am also going to do this stuff, hopefully I will write a worthy research paper about Gaussian Splatting.
Go for it mate.. you won't regret it
You guys even explained yourselves why the Batman shot doesn't work: it's zooming. Changing focal lengths are not supported by COLMAP. You would need to solve the variable focal length in software like SynthEyes and remove the focal length change by scaling the image down inside the frame as the footage zooms in. This would also require you to remove the variable lens distortion present in most zoom lenses. It's a complex thing to do.
Yessir!! U got that right 🙏🫡
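The ambiguity behind the zoom problem is visible in the pinhole model u = f * X / Z: for a fronto-parallel plane, doubling the focal length and halving the distance move image points identically, so a solver that assumes fixed intrinsics can't separate zoom from dolly. The numbers below are illustrative only.

```python
# Zoom vs dolly under the pinhole model: identical image motion for a flat,
# frontal scene, which is what breaks fixed-intrinsics reconstruction.
def project(f, X, Z):
    """Pinhole projection of lateral offset X at depth Z, focal length f."""
    return f * X / Z

X = 1.2
u_zoom  = project(2 * 50.0, X, 10.0)  # zoom in: focal length doubled
u_dolly = project(50.0, X, 5.0)       # dolly in: camera distance halved
assert u_zoom == u_dolly
```

For scenes with real depth variation the two motions do differ (dolly produces parallax, zoom doesn't), but disentangling them per frame is exactly the variable-focal-length solve described above.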
Your videos are great
We appreciate that!
That's actually awesome!! Gg 👏👏 it brings so much exploration to enjoy and understand a scene... the setting, the composition, the atmosphere... Super video, and as you said, in a VR viewer it would be craaazy
Thanks a lot! Glad you liked it buddy...yeah VR is next
What's the hardware specs you're using?
Rtx 4090
This reminds me of Cyberpunk 2077's braindance tech. If you can recreate scenes from movies, you can do the same from a vlog so crazy. Digital forensics 😯.
yess!! EXACTLY!! Braindance was so cooolll ... can't wait for Gaussian Splats to be dynamic
Keep ‘em coming!
will do ;)
Can you run a mesh to metahuman on a head out of this? that would be ultra dope.
Good shlt. Very interesting.
Someday you guys will get a viral +1M video. You deserve it.
YESSSIRRRRR!!! And when we do ... we shall celebrate together
and turn them into 3d vr movies?
Eventually 😜
Bad ass!
Right???
This will open a new realm of shitposts
Have you tried to "de-blur" source images by restoring them with AI? If we have some temporal coherence, that would probably give some amazing results when fed into the pipeline
We tested Topaz to enhance the photos, and even though they looked better, for some reason it didn't work out to create better results.
The Harry Potter clip with the Hogwarts Castle can be upscaled and given more detail using AI on each frame. Then the render would look really awesome!
TBH we tried Topaz upscaling on another scan of ours and it didn't help with the training of that for some reason. We will have to try again for movies maybe it will work!!
From the Inglourious Basterds movie: what about selecting the actors from all frames, and making sure that they always come from one or more frames in which they never move?!
Some PS Content-Aware Fill or AI could fill in the areas around them, and on the actors, that might be gone or visible on those frames because the angle of the camera changes.
Perhaps you guys could use AI to generate pictures of them from those other angles to fake them not moving, while only the camera moves?!
Then try to render this scene again to avoid getting artifacts on them.
Omg, combining this with some of the stuff Adobe has been doing lately to "fill in the cracks" and upscale... You'll be able to pull flawless backgrounds, remove distractions, fix faces and other moving items automatically, add items, repose and recompose your shots with no effort at all.
You said it best! Will definitely be trying that all out
@@badxstudio I'm sooo hype. You two have THE best energy!
A great use of AI would be to fill in the areas the camera can't see. A bit like Photoshop's AI generative fill, for 3D.
we were thinking about the same thing... it's coming for sure
I guess you mean "43 years ago"? ...Great math!
Do you know what they will enscribe on your tombstone?
"33 YEARS AGO"
hahaha YouTube will never forgive us
That math, though!! Try 43, lol
oooppps
Here's my pitch
New Industry - Virtual Reality Movies | See "Ready Player One"
Allow people to interact with old movies
Mode A. Casual observers walking around the scene
Mode B. "Karaoke Mode" - Protagonist say lines on cue and be in the scenes on cue - "a la" Just Dance Scoring points
Mode C. Party Mode - friends play supporting characters
Mode D. Multiverse Mode - GTA worlds of the entire movie available to explore RPG style where user can randomly switch Modes A - C
Yo this is such a cool idea!! Totally can see this happening hahaha so cool
@@badxstudio if you have the bandwidth, I have some capital resources; we could create a prototype and go from there
Naturally, the plural of princess is princesses.
"The principal had principles of a prince, but the princesses paid prices, and his processes preceded many crisis."
It's a stupid made-up language.
Lol thanks for clarifying that hahahaha
Such great content 👏🏻🔥🔝
Love you broski!
The limit is the Quality ;-)
Give it some time, it will catch up
@@badxstudio not for that use-case :-) only AI can create something that isn't in the original footage (resolution/occlusions) :-)
I wanna do the Google Maps and do the shoots myself
U mean u want to gaussian the google maps in unreal?
This is exactly like the braindance sequence from Cyberpunk 2077..
PRECISELY!!!
Just wait till you see us taking this to VR in the next video. We actually scaled the sprite sizes in the sequencer so they suddenly enlarge and create the scene, it looks so similar to braindance!
if the Shining was filmed in 1980, wouldn't it be 43 years ago?
yeahhh we fked that up :D :D Quick MATHSSSS